Drug dosing is an important application of AI, which can be formulated as a Reinforcement Learning (RL) problem. In this paper, we identify two major challenges of using RL for drug dosing: delayed and prolonged effects of administering medications, which break the Markov assumption of the RL framework. We focus on prolongedness and define PAE-POMDP (Prolonged Action Effect-Partially Observable Markov Decision Process), a subclass of POMDPs in which the Markov assumption does not hold specifically due to prolonged effects of actions. Motivated by the pharmacology literature, we propose a simple and effective approach to converting drug dosing PAE-POMDPs into MDPs, enabling the use of existing RL algorithms to solve such problems. We validate the proposed approach on a toy task, and a challenging glucose control task, for which we devise a clinically-inspired reward function. Our results demonstrate that: (1) the proposed method to restore the Markov assumption leads to significant improvements over a vanilla baseline; (2) the approach is competitive with recurrent policies which may inherently capture the prolonged effect of actions; (3) it is remarkably more time and memory efficient than the recurrent baseline and hence more suitable for real-time dosing control systems; and (4) it exhibits favorable qualitative behavior in our policy analysis.
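The abstract does not spell out the conversion, but one plausible reading, sketched below, is to fold the prolonged effect into the state by tracking an exponentially decaying trace of past doses, so the augmented observation becomes (approximately) Markov again. The gym-style wrapper interface and the decay rate `gamma_effect` are illustrative assumptions, not the paper's construction.

```python
import numpy as np

class EffectiveActionWrapper:
    """Hypothetical sketch: fold prolonged drug effects into the state.

    Each administered dose is accumulated into an exponentially decaying
    "effective action" trace, which is concatenated to the raw observation.
    gamma_effect is an assumed per-step decay rate, not a value from the paper.
    """

    def __init__(self, env, gamma_effect=0.9):
        self.env = env
        self.gamma_effect = gamma_effect
        self.effective_dose = 0.0

    def reset(self):
        self.effective_dose = 0.0
        obs = self.env.reset()
        return np.append(obs, self.effective_dose)

    def step(self, dose):
        # Decay the lingering effect of past doses, then add the new one.
        self.effective_dose = self.gamma_effect * self.effective_dose + dose
        obs, reward, done, info = self.env.step(dose)
        # The augmented observation carries the drug's residual effect.
        return np.append(obs, self.effective_dose), reward, done, info
```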
Learning policies from fixed offline datasets is a key challenge in scaling up reinforcement learning (RL) algorithms towards practical applications. This is often because off-policy RL algorithms suffer from distributional shift, due to the mismatch between the dataset and the target policy, leading to high variance and over-estimation of value functions. In this work, we propose variance regularization for offline RL algorithms, using stationary distribution corrections. We show that by using Fenchel duality, we can avoid double-sampling issues when computing the gradient of the variance regularizer. The proposed algorithm for offline variance regularization (OVAR) can be used to augment any existing offline policy optimization algorithm. We show that the regularizer leads to a lower bound on the offline policy optimization objective, which can help avoid over-estimation errors, and explains the benefits of our approach across a range of continuous control domains when compared to existing state-of-the-art algorithms.
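The duality step can be illustrated with the standard identity Var[X] = min_nu E[(X - nu)^2], which replaces the squared expectation in Var[X] = E[X^2] - (E[X])^2 and so removes the need for two independent samples per gradient. The toy estimator below is a sketch of that generic trick only, not the paper's OVAR objective; the learning rate and update rule are assumptions.

```python
import numpy as np

def variance_via_fenchel_dual(samples, lr=0.01, steps=5000):
    """Illustrative single-sample variance estimation via a dual variable.

    The gradient of (E[X])^2 would require two independent samples of X
    ("double sampling"). Using Var[X] = min_nu E[(X - nu)^2], each stochastic
    update of the dual variable nu needs only one sample.
    """
    nu = 0.0
    for _ in range(steps):
        x = np.random.choice(samples)      # one sample per update
        nu -= lr * 2.0 * (nu - x)          # gradient of (x - nu)^2 w.r.t. nu
    return float(np.mean((samples - nu) ** 2))   # plug-in variance estimate

xs = np.random.default_rng(0).normal(loc=2.0, scale=1.5, size=10_000)
print(variance_via_fenchel_dual(xs), xs.var())   # both roughly 1.5 ** 2 = 2.25
```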
The core operation of current Graph Neural Networks (GNNs) is the aggregation enabled by the graph Laplacian or message passing, which filters the neighborhood information of nodes. Though effective for various tasks, in this paper we show that aggregation is potentially a problematic factor underlying all GNN models for learning on certain datasets, as it forces node representations to become similar, so that nodes gradually lose their identity and become indistinguishable. Hence, we augment the aggregation operations with their dual, i.e. diversification operators that make nodes more distinct and preserve their identity. Such augmentation replaces the aggregation with a two-channel filtering process that, in theory, is beneficial for enriching the node representations. In practice, the proposed two-channel filters can be easily patched onto existing GNN methods with diverse training strategies, including spectral and spatial (message passing) methods. In the experiments, we observe the desired characteristics of the models and a significant performance boost over the baselines on 9 node classification tasks.
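A minimal sketch of what such a two-channel layer could look like is given below, assuming the aggregation channel is the symmetrically normalized adjacency A_hat and the diversification channel is its complement I - A_hat; the fixed mixing weights are illustrative assumptions, whereas the paper combines the channels in a learned fashion.

```python
import numpy as np

def two_channel_layer(X, A, w_agg=0.5, w_div=0.5):
    """Hypothetical two-channel filtering step on node features X.

    Aggregation channel:     A_hat @ X        (smooths neighbours, low-pass)
    Diversification channel: (I - A_hat) @ X  (sharpens differences, high-pass)
    """
    n = A.shape[0]
    A_self = A + np.eye(n)                                 # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_self.sum(axis=1))
    A_hat = d_inv_sqrt[:, None] * A_self * d_inv_sqrt[None, :]   # sym. normalised
    aggregated = A_hat @ X
    diversified = (np.eye(n) - A_hat) @ X
    return w_agg * aggregated + w_div * diversified

# Tiny example: a triangle graph with 2-d node features.
A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
print(two_channel_layer(X, A))
```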
Using massive datasets to train large-scale models has emerged as a dominant approach for broad generalization in natural language and vision applications. In reinforcement learning, however, a key challenge is that available data of sequential decision making is often not annotated with actions - for example, videos of game-play are much more available than sequences of frames paired with their logged game controls. We propose to circumvent this challenge by combining large but sparsely-annotated datasets from a target environment of interest with fully-annotated datasets from various other source environments. Our method, Action Limited PreTraining (ALPT), leverages the generalization capabilities of inverse dynamics modelling (IDM) to label missing action data in the target environment. We show that utilizing even one additional environment dataset of labelled data during IDM pretraining gives rise to substantial improvements in generating action labels for unannotated sequences. We evaluate our method on benchmark game-playing environments and show that we can significantly improve game performance and generalization capability compared to other approaches, using annotated datasets equivalent to only 12 minutes of gameplay. Highlighting the power of IDM, we show that these benefits remain even when target and source environments share no common actions.
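A toy sketch of the labelling stage is shown below: an inverse dynamics model is fit on the annotated transitions and then used to pseudo-label unannotated target transitions before policy pretraining. The nearest-centroid classifier and the synthetic data are stand-ins for the transformer IDM and game data used in ALPT.

```python
import numpy as np

def train_idm(obs_pairs, actions):
    """Toy inverse dynamics model: nearest-centroid on (s, s') differences.

    A deliberately simple stand-in for the IDM used in ALPT; the feature
    choice and classifier are illustrative assumptions.
    """
    deltas = obs_pairs[:, 1] - obs_pairs[:, 0]
    centroids = {a: deltas[actions == a].mean(axis=0) for a in np.unique(actions)}

    def predict(s, s_next):
        d = s_next - s
        return min(centroids, key=lambda a: np.linalg.norm(d - centroids[a]))

    return predict

# Annotated transitions (source data plus a handful of labelled target ones).
rng = np.random.default_rng(0)
moves = np.array([[1.0, 0.0], [0.0, 1.0]])               # effect of action 0 / 1
s = rng.normal(size=(200, 2))
a = rng.integers(0, 2, size=200)
s_next = s + moves[a] + 0.05 * rng.normal(size=(200, 2))

idm = train_idm(np.stack([s, s_next], axis=1), a)

# Pseudo-label an unannotated target transition so it can join pretraining.
print(idm(np.array([0.0, 0.0]), np.array([0.97, 0.02])))  # expected: 0
```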
Reinforcement learning (RL) folklore suggests that history-based function approximation methods, such as recurrent neural nets or history-based state abstraction, perform better than their memory-less counterparts, due to the fact that function approximation in Markov decision processes (MDPs) can be viewed as inducing a partially observable MDP. However, there has been little formal analysis of such history-based algorithms, as most existing frameworks focus exclusively on memory-less features. In this paper, we introduce a theoretical framework for studying the behaviour of RL algorithms that learn to control an MDP using history-based feature abstraction mappings. Furthermore, we use this framework to design a practical RL algorithm and we numerically evaluate its effectiveness on a set of continuous control tasks.
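As a concrete (assumed) example of a history-based feature abstraction mapping, the sketch below flattens a sliding window of the last k observation-action pairs into a fixed-size feature vector that a memoryless learner can consume; the window length and zero-padding scheme are choices made purely for illustration.

```python
import numpy as np
from collections import deque

class HistoryFeatures:
    """Illustrative history-based abstraction phi(h_t) -> feature vector.

    Keeps a sliding window of the last k (observation, action) pairs and
    flattens it, so a memory-less RL algorithm can consume the result.
    """

    def __init__(self, k, obs_dim, act_dim):
        self.k = k
        self.pair_dim = obs_dim + act_dim
        self.buffer = deque(maxlen=k)

    def reset(self):
        self.buffer.clear()

    def update(self, obs, action):
        self.buffer.append(np.concatenate([obs, action]))

    def features(self):
        # Zero-pad on the left until k pairs have been observed.
        pad = [np.zeros(self.pair_dim)] * (self.k - len(self.buffer))
        return np.concatenate(pad + list(self.buffer))

phi = HistoryFeatures(k=3, obs_dim=2, act_dim=1)
phi.update(np.array([0.1, -0.2]), np.array([1.0]))
print(phi.features())        # zero-padded 9-dimensional history feature
```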
Abstraction has been widely studied as a way to improve the efficiency and generalization of reinforcement learning algorithms. In this paper, we study abstraction in the continuous-control setting. We extend the definition of MDP homomorphisms to encompass continuous actions in continuous state spaces. We derive a policy gradient theorem on the abstract MDP, which allows us to leverage approximate symmetries of the environment for policy optimization. Based on this theorem, we propose an actor-critic algorithm that can learn the policy and the MDP homomorphism map simultaneously, using the lax bisimulation metric. We demonstrate the effectiveness of our method on benchmark tasks in the DeepMind Control Suite. Our method's ability to exploit MDP homomorphisms for representation learning leads to improved performance when learning from pixel observations.
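As a toy illustration of the kind of symmetry an MDP homomorphism can exploit, the sketch below maps a reflection-symmetric 1-d control problem onto an abstract half-space and lifts abstract actions back to the original MDP. The specific map and lifting rule are assumptions for illustration, not the learned homomorphism from the paper.

```python
import numpy as np

def to_abstract(state, action):
    """Reflection symmetry s -> -s, a -> -a of a 1-d point-mass task.

    The map sends symmetric (state, action) pairs to one abstract pair,
    so a policy learned on the abstract MDP covers both halves.
    """
    sign = float(np.sign(state)) if state != 0 else 1.0
    return abs(state), sign * action, sign

def lift_action(abstract_action, sign):
    # Lift an abstract action back to the original MDP.
    return sign * abstract_action

# A policy that always pushes toward the origin in the abstract MDP ...
abstract_policy = lambda abs_state: -0.5 * abs_state

for s in (-2.0, 3.0):
    abs_s, _, sign = to_abstract(s, 0.0)
    a = lift_action(abstract_policy(abs_s), sign)
    print(s, a)          # ... lifts to actions that point back toward the origin
```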
In model-based reinforcement learning, an agent can leverage a learned model to improve the way it behaves. Two prevalent approaches are decision-time planning and background planning. In this study, we are interested in understanding under what conditions and in which settings one of these planning styles will perform better than the other in domains that require fast responses. After viewing them through the lens of dynamic programming, we first consider the classical instantiations of these planning styles and provide theoretical results and hypotheses on which will perform better in the pure planning, planning & learning, and transfer learning settings. We then consider the modern instantiations of these planning styles and provide hypotheses on which will perform better in the last two of these settings. Finally, we perform several illustrative experiments to validate our theoretical results and hypotheses. Overall, our findings suggest that even though decision-time planning does not perform as well as background planning in their classical instantiations, in their modern instantiations it performs on par with or better than background planning in the planning & learning and transfer learning settings.
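The sketch below contrasts the two styles on an assumed toy chain with a learned one-step model: background (Dyna-style) planning spends model queries updating a value table between decisions, while decision-time planning spends them on a rollout search at the moment of action selection. The tabular setting, learning rate, and rollout depth are illustrative assumptions only.

```python
import numpy as np

# Assumed toy setup: learned model of a 5-state chain with reward at the right end.
n_states, n_actions, gamma = 5, 2, 0.9
model_next = lambda s, a: min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
model_reward = lambda s, a: 1.0 if model_next(s, a) == n_states - 1 else 0.0

def background_planning(q, num_updates=500, rng=np.random.default_rng(0)):
    """Dyna-style: improve a Q-table with simulated transitions *between* decisions."""
    for _ in range(num_updates):
        s, a = rng.integers(n_states), rng.integers(n_actions)
        s2 = model_next(s, a)
        q[s, a] += 0.5 * (model_reward(s, a) + gamma * q[s2].max() - q[s, a])
    return q

def decision_time_planning(s, depth=3):
    """Rollout-style: search the model *at* decision time, no stored values."""
    if depth == 0:
        return 0.0, None
    return max(
        (model_reward(s, a) + gamma * decision_time_planning(model_next(s, a), depth - 1)[0], a)
        for a in range(n_actions)
    )

q = background_planning(np.zeros((n_states, n_actions)))
print("background picks", int(q[2].argmax()), "| decision-time picks", decision_time_planning(2)[1])
```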
In this work, we study the use of the Bellman equation as a surrogate objective for value prediction accuracy. While the Bellman equation is uniquely solved by the true value function over all state-action pairs, we find that the Bellman error (the difference between the two sides of the equation) is a poor proxy for the accuracy of the value function. In particular, we show that (1) due to cancellations from both sides of the Bellman equation, the magnitude of the Bellman error is only weakly related to the distance to the true value function, even when considering all state-action pairs, and (2) in the finite-data regime, the Bellman equation can be satisfied exactly by infinitely many suboptimal solutions. This means that the Bellman error can be minimized without improving the accuracy of the value function. We demonstrate these phenomena through a series of propositions, illustrative toy examples, and empirical analysis in standard benchmark domains.
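The cancellation in point (1) can be seen in a tiny assumed example: on a two-state chain with discount gamma = 0.99, shifting the true values uniformly by a constant c gives a value error of |c| at every state but a Bellman error of only (1 - gamma)|c|, since the shift nearly cancels between the two sides of the equation.

```python
gamma = 0.99

# Two-state deterministic chain: s0 -> s1 -> s1, reward 1 on every transition.
# True values: V*(s1) = 1 / (1 - gamma), V*(s0) = 1 + gamma * V*(s1).
v_true = {1: 1.0 / (1.0 - gamma)}
v_true[0] = 1.0 + gamma * v_true[1]
next_state = {0: 1, 1: 1}

def bellman_error(v):
    # Largest residual of V(s) = r + gamma * V(next(s)) over both transitions.
    return max(abs(v[s] - (1.0 + gamma * v[next_state[s]])) for s in (0, 1))

def value_error(v):
    return max(abs(v[s] - v_true[s]) for s in (0, 1))

# A uniformly shifted estimate: huge value error, tiny Bellman error.
c = 50.0
v_shifted = {s: v_true[s] + c for s in (0, 1)}
print(value_error(v_shifted))    # 50.0
print(bellman_error(v_shifted))  # (1 - gamma) * 50 = 0.5
```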
Temporal abstraction in reinforcement learning is the ability of an agent to learn and use high-level behaviors, called options. The option-critic architecture provides a gradient-based, end-to-end learning method for constructing options. We propose an attention-based extension to this framework, which enables the agent to learn to focus different options on different aspects of the observation space. We show that this leads to behaviorally diverse options that are also capable of state abstraction, and it prevents the degeneracy problems of option domination and frequent option switching that occur in option-critic, while achieving a similar sample complexity. We also demonstrate the more efficient, interpretable, and reusable nature of the learned options through different transfer learning tasks. Experimental results in a relatively simple four-rooms environment and the more complex ALE (Arcade Learning Environment) demonstrate the efficacy of our approach.
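A minimal sketch of the attention idea is below: each option carries its own attention weights over observation dimensions and acts on the masked observation, so different options can specialize to different parts of the input. The linear intra-option policy and random initialization are assumptions for illustration, not the paper's architecture.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class AttentionOption:
    """Sketch of an option with its own attention mask over observations.

    The masked observation is what the intra-option policy sees, so each
    option can learn to focus on a different aspect of the observation space.
    """

    def __init__(self, obs_dim, n_actions, rng):
        self.attn_logits = rng.normal(size=obs_dim)
        self.policy_weights = rng.normal(size=(n_actions, obs_dim))

    def act(self, obs):
        masked = softmax(self.attn_logits) * obs          # attend, then act
        return int(np.argmax(self.policy_weights @ masked))

rng = np.random.default_rng(0)
options = [AttentionOption(obs_dim=4, n_actions=3, rng=rng) for _ in range(2)]
obs = np.array([0.2, -1.0, 0.5, 0.0])
print([opt.act(obs) for opt in options])   # the two options can behave differently
```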
Deep reinforcement learning (RL) is a powerful framework for solving complex real-world problems. The large neural networks used in this framework are traditionally associated with better generalization, but their increased size comes with the drawbacks of long training times, substantial hardware requirements, and longer inference times. One way to tackle this problem is to prune the networks, leaving only the necessary parameters. State-of-the-art concurrent pruning techniques for imposing sparsity perform well in applications where the data distribution is fixed, but they have not yet been explored much in the context of RL. We close the gap between RL and single-shot pruning techniques and present a general pruning approach for offline RL, leveraging the fixed dataset to prune the neural networks before RL training begins. We then run experiments at different network sparsity levels and evaluate the effectiveness of pruning-at-initialization techniques on continuous control tasks. Our results show that with 95% of the network weights pruned, offline-RL algorithms can still retain performance in the majority of our experiments. To the best of our knowledge, no prior work has used pruning in RL while retaining performance at such high levels of sparsity. Moreover, pruning-at-initialization techniques can easily be integrated into any existing offline-RL algorithm without changing the learning objective.
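A minimal sketch of the single-shot setup is shown below, using global magnitude pruning as a stand-in for the pruning-at-initialization criteria the paper evaluates; the mask is computed once and then held fixed throughout offline RL training.

```python
import numpy as np

def single_shot_prune(weights, sparsity=0.95):
    """Build a fixed mask keeping only the largest-magnitude weights.

    Magnitude scoring is just one criterion that could be plugged in here;
    the 95% level matches the sparsity regime reported in the abstract.
    """
    threshold = np.quantile(np.abs(weights), sparsity)
    return (np.abs(weights) > threshold).astype(weights.dtype)

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 64))           # e.g. one layer of a Q-network
mask = single_shot_prune(w, sparsity=0.95)
w_pruned = w * mask                      # prune once, before offline RL training

print(mask.mean())                       # ~0.05 of the weights remain
# During training, gradient updates would be multiplied by the same fixed mask
# so pruned weights stay at zero; the learning objective itself is unchanged.
```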